    Prime Forms in Possibilistic Logic

    Possibilistic logic is a weighted logic used to represent uncertain and inconsistent knowledge. Its semantics is often defined by a possibility distribution, a function from a set of interpretations to a totally ordered scale. In this paper, we consider a new semantic characterization of knowledge bases in possibilistic logic (possibilistic knowledge bases) via a generalized notion of propositional prime implicant, which we call the prioritized prime implicant. We first identify several desirable properties that a prioritized prime implicant should satisfy in order to characterize possibilistic knowledge bases, and show by example that existing generalizations of prime implicants in possibilistic logic do not satisfy all of them. We then provide a novel definition of prioritized prime implicants as sets of weighted literals that may be inconsistent, and show that they satisfy all the desirable properties. Finally, we discuss the problem of computing the prioritized prime implicants of a possibilistic knowledge base.
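    As a worked illustration of the semantics the abstract refers to (a standard formulation following Dubois and Prade, not necessarily the paper's exact notation), a possibilistic base $B = \{(\varphi_i, a_i)\}$ induces the least specific possibility distribution over interpretations $\omega$:

```latex
% Least specific possibility distribution induced by B = {(phi_i, a_i)}:
% an interpretation is fully possible iff it satisfies every formula;
% otherwise it is penalized by the strongest formula it violates.
\pi_B(\omega) =
  \begin{cases}
    1 & \text{if } \omega \models \varphi_i \text{ for all } (\varphi_i, a_i) \in B,\\[2pt]
    1 - \max\{\, a_i \mid (\varphi_i, a_i) \in B,\ \omega \not\models \varphi_i \,\} & \text{otherwise.}
  \end{cases}
```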

    RaDON - Repair and Diagnosis in Ontology Networks

    One of the major challenges in managing networked and dynamic ontologies is handling inconsistencies within single ontologies, as well as inconsistencies introduced by integrating multiple distributed ontologies. Our RaDON system provides functionality to repair and diagnose ontology networks by extending the capabilities of existing reasoners. The system integrates several new debugging and repair algorithms, such as a relevance-directed algorithm, to meet the varied needs of users.
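    The abstract does not spell out the algorithms, but a common building block for this kind of diagnosis is shrinking an inconsistent axiom set to a minimal inconsistent subset. A minimal sketch, assuming a caller-supplied is_consistent oracle (a hypothetical stand-in for a call into a DL reasoner):

```python
def shrink_to_minimal_conflict(axioms, is_consistent):
    """Reduce an inconsistent set of axioms to a minimal inconsistent
    subset (one diagnosis candidate).

    `is_consistent` is a hypothetical oracle, e.g. a call into a DL
    reasoner; any axiom whose removal keeps the set inconsistent is
    redundant for the conflict and can be dropped.
    """
    assert not is_consistent(axioms), "input must be inconsistent"
    core = list(axioms)
    i = 0
    while i < len(core):
        candidate = core[:i] + core[i + 1:]
        if not is_consistent(candidate):
            core = candidate   # axiom i is not needed for the conflict
        else:
            i += 1             # axiom i is essential; keep it
    return core
```

    Deletion-based shrinking like this costs one reasoner call per axiom; a relevance-directed variant would presumably cut that cost by first examining axioms syntactically related to the detected contradiction.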

    TeGit: Generating High-Quality Instruction-Tuning Data with Text-Grounded Task Design

    High-quality instruction-tuning data is critical to improving LLM capabilities. Existing data collection methods are limited either by unrealistic manual labeling costs or by the hallucination that comes with relying solely on LLM generation. To address these problems, this paper presents a scalable method for automatically collecting high-quality instruction-tuning data by training language models to design tasks based on human-written texts. Intuitively, grounding task design in human-written text helps the model suppress hallucination during task generation. Unlike instruction back-translation-based methods that directly take the given text as the response, we require the model to generate the instruction, input, and output simultaneously, which filters out noise. Results of both automatic and manual evaluation demonstrate the quality of our dataset. Comment: Work in progress.
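    A minimal sketch of the text-grounded collection loop as described: prompt a model with a human-written passage, ask it for an (instruction, input, output) triple, and drop triples that are malformed or not grounded in the passage. The prompt wording, the call_lm interface, and the grounding heuristic are illustrative assumptions, not the paper's implementation:

```python
import json

# Hypothetical prompt; the paper's actual template is not given in the abstract.
PROMPT = (
    "Given the passage below, design one task. Reply as JSON with keys "
    '"instruction", "input", "output".\n\nPassage:\n{text}'
)

def collect_examples(passages, call_lm):
    """call_lm(prompt) -> str is any LLM client; returns filtered triples."""
    examples = []
    for text in passages:
        try:
            triple = json.loads(call_lm(PROMPT.format(text=text)))
        except (json.JSONDecodeError, TypeError):
            continue  # drop malformed generations
        if not isinstance(triple, dict):
            continue
        if not all(triple.get(k) for k in ("instruction", "input", "output")):
            continue  # drop incomplete triples (noise filtering)
        # Crude grounding check: the output should overlap with the passage,
        # since the text is meant to anchor generation and curb hallucination.
        overlap = set(triple["output"].lower().split()) & set(text.lower().split())
        if len(overlap) >= 3:
            examples.append(triple)
    return examples
```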

    Learn from Yesterday: A Semi-Supervised Continual Learning Method for Supervision-Limited Text-to-SQL Task Streams

    Conventional text-to-SQL studies are limited to a single task with a fixed-size training and test set. When confronted with a stream of tasks, as is common in real-world applications, existing methods struggle with insufficient supervised data and high retraining costs. The former tends to cause overfitting on unseen databases for the new task, while the latter makes a full review of instances from past tasks impractical, leading the model to forget learned SQL structures and database schemas. To address these problems, this paper proposes integrating semi-supervised learning (SSL) and continual learning (CL) in a stream of text-to-SQL tasks, and offers two promising solutions in turn. The first solution, Vanilla, performs self-training, augmenting the supervised training data with predicted pseudo-labeled instances of the current task, and replaces full-volume retraining with episodic memory replay to balance training efficiency against performance on previous tasks. The improved solution, SFNet, takes advantage of the intrinsic connection between CL and SSL: it uses in-memory past information to help current SSL, while adding high-quality pseudo instances to memory to improve future replay. Experiments on two datasets show that SFNet outperforms the widely used SSL-only and CL-only baselines on multiple metrics. Comment: Accepted by AAAI-2023.
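    A minimal sketch of the Vanilla solution as described (self-training on confident pseudo-labels plus episodic memory replay instead of full retraining); the model interface, confidence threshold, and memory size are illustrative assumptions rather than the paper's code:

```python
import random

def train_on_task_stream(model, tasks, memory, mem_per_task=50, conf_thresh=0.9):
    """tasks: list of (labeled, unlabeled) pairs, one per text-to-SQL task.
    `model` is a hypothetical interface with fit() and
    predict_with_confidence(); `memory` is the episodic replay buffer."""
    for labeled, unlabeled in tasks:
        # Self-training: pseudo-label confident unlabeled instances.
        pseudo = []
        for question in unlabeled:
            sql, conf = model.predict_with_confidence(question)
            if conf >= conf_thresh:
                pseudo.append((question, sql))
        # Replay a small episodic memory instead of retraining on all past tasks.
        model.fit(labeled + pseudo + memory)
        # Store a few labeled instances from this task for future replay.
        memory.extend(random.sample(labeled, min(mem_per_task, len(labeled))))
    return model
```

    SFNet, per the abstract, couples the two loops more tightly: the replay memory also informs the current task's pseudo-labeling, and high-quality pseudo instances flow back into memory.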

    Evaluation of ChatGPT as a Question Answering System for Answering Complex Questions

    ChatGPT is a powerful large language model (LLM) that has made remarkable progress in natural language understanding. Nevertheless, the performance and limitations of the model still need to be extensively evaluated. As ChatGPT covers resources such as Wikipedia and supports natural language question answering, it has garnered attention as a potential replacement for traditional knowledge-based question answering (KBQA) models. Complex question answering is a challenging KBQA task that comprehensively tests a model's abilities in semantic parsing and reasoning. To assess the performance of ChatGPT as a question answering system (QAS) using its own knowledge, we present a framework that evaluates its ability to answer complex questions. Our approach involves categorizing the potential features of complex questions and describing each test question with multiple labels to identify combinatorial reasoning. Following the black-box testing specifications of CheckList proposed by Ribeiro et al., we develop an evaluation method to measure the functionality and reliability of ChatGPT in reasoning over complex questions. We use the proposed framework to evaluate the performance of ChatGPT on 8 real-world KB-based CQA datasets, including 6 English and 2 multilingual datasets, with a total of approximately 190,000 test cases. We compare the evaluation results of ChatGPT, GPT-3.5, GPT-3, and FLAN-T5 to identify common long-term problems in LLMs. The dataset and code are available at https://github.com/tan92hl/Complex-Question-Answering-Evaluation-of-ChatGPT.
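    A minimal sketch of the label-based measurement the framework describes: each test question carries multiple feature labels, and accuracy is aggregated per label to expose which kinds of reasoning fail. The field names here are illustrative, not the released code's schema:

```python
from collections import defaultdict

def per_label_accuracy(test_cases):
    """test_cases: iterable of dicts like
    {"labels": ["multi-hop", "comparison"], "correct": True}.
    Returns accuracy per feature label, CheckList-style, so that
    failures can be traced to specific question features."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for case in test_cases:
        for label in case["labels"]:
            totals[label] += 1
            hits[label] += int(case["correct"])
    return {label: hits[label] / totals[label] for label in totals}
```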